Modeling Musical Mood From Audio Features and Listening Context on an In-Situ Data Set
Authors
Abstract
Real-life listening experiences contain a wide range of music types and genres. We create the first model of musical mood using a data set gathered in-situ during a user’s daily life. We show that while audio features, song lyrics and socially created tags can be used to successfully model musical mood with classification accuracies greater than chance, adding contextual information such as the listener’s affective state or listening context can improve classification accuracy. We successfully classify musical arousal with a classification accuracy of 67% and musical valence with an accuracy of 75% when using both musical features and listening context.
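The abstract's central step is fusing audio-derived features with listening-context information before classifying arousal and valence. The sketch below is a minimal illustration of that kind of fusion, not the authors' pipeline: the synthetic data, the specific features (tempo, spectral centroid, RMS energy, activity), and the random-forest classifier are all assumptions.

```python
# Hypothetical sketch: fusing audio features with listening-context features
# to classify musical arousal/valence. The feature set, the encoder, and the
# random-forest classifier are illustrative assumptions, not the paper's method.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for per-song audio features and listening context.
df = pd.DataFrame({
    "tempo": rng.normal(120, 20, n),                  # assumed audio feature
    "spectral_centroid": rng.normal(2000, 400, n),    # assumed audio feature
    "rms_energy": rng.normal(0.1, 0.02, n),           # assumed audio feature
    "activity": rng.choice(["commuting", "working", "relaxing"], n),  # assumed context
})
y = rng.integers(0, 2, n)  # binary arousal label (low/high), synthetic

preprocess = ColumnTransformer([
    ("audio", StandardScaler(), ["tempo", "spectral_centroid", "rms_energy"]),
    ("context", OneHotEncoder(handle_unknown="ignore"), ["activity"]),
])
clf = Pipeline([("prep", preprocess),
                ("model", RandomForestClassifier(random_state=0))])

# Cross-validated accuracy; on real data, the audio-only vs. audio-plus-context
# comparison from the abstract would be made by refitting without "activity".
print(cross_val_score(clf, df, y, cv=5).mean())
```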
Similar resources
An In-Situ Study of Real-Life Listening Context
Current models of musical mood are based on clean, noiseless data that does not correspond to real-life listening experiences. We conducted an experience-sampling study collecting in-situ data of listening experiences. We show that real-life music listening experiences are far from the homogeneous experiences used in current models of musical mood.
Modeling Expressive Musical Performance with Hidden Markov Models
By Graham Charles Grindlay. Although one can easily produce a literal audio rendition of a musical score, the result is often bland and emotionally dry. In this thesis, we consider the problem of modeling and synthesizing expressive piano performance. Expressive piano performance can be characterized by variations in three primary...
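The snippet above mentions hidden Markov models for expressive performance. As a generic illustration of that technique only (not Grindlay's model), the sketch below fits a Gaussian HMM from the hmmlearn package to synthetic per-note expressive deviations; the two-dimensional (timing, loudness) feature choice and the three hidden states are assumptions.

```python
# Generic Gaussian-HMM sketch (hmmlearn), not the thesis's actual model:
# each observation is an assumed per-note vector of expressive deviations
# (timing offset, loudness offset); hidden states group expressive modes.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Two synthetic performances, each a sequence of (timing, loudness) deviations.
perf_a = rng.normal(loc=[0.02, 0.1], scale=0.05, size=(60, 2))
perf_b = rng.normal(loc=[-0.01, -0.2], scale=0.05, size=(40, 2))
X = np.vstack([perf_a, perf_b])
lengths = [len(perf_a), len(perf_b)]

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(X, lengths)

# Decode the most likely hidden expressive state for each note,
# then sample a new deviation sequence from the learned model.
states = model.predict(X, lengths)
sampled_obs, sampled_states = model.sample(30)
print(states[:10], sampled_obs.shape)
```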
Benefits of listening to a recording of euphoric joint music making in polydrug abusers
BACKGROUND AND AIMS: Listening to music can have powerful physiological and therapeutic effects. Some essential features of the mental mechanism underlying beneficial effects of music are probably strong physiological and emotional associations with music created during the act of music making. Here we tested this hypothesis in a clinical population of polydrug abusers in rehabilitation listenin...
Predicting the perception of performed dynamics in music audio with ensemble learning
By varying the dynamics in a musical performance, the musician can convey structure and different expressions. Spectral properties of most musical instruments change in a complex way with the performed dynamics, but dedicated audio features for modeling the parameter are lacking. In this study, feature extraction methods were developed to capture relevant attributes related to spectral characte...
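To make the snippet's idea concrete, here is a rough sketch of spectral features feeding an ensemble regressor, assuming librosa descriptors, a gradient-boosting model, and a hypothetical input file; it is not the study's feature set or evaluation.

```python
# Illustrative sketch only: frame-level spectral features -> ensemble regressor
# predicting a dynamics-related target. The feature set, the target, and
# "performance.wav" are hypothetical; this is not the study's pipeline.
import librosa
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

y, sr = librosa.load("performance.wav")  # hypothetical audio file

# Frame-wise descriptors whose behaviour tends to track performed dynamics.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
flatness = librosa.feature.spectral_flatness(y=y)[0]
rms = librosa.feature.rms(y=y)[0]
X = np.column_stack([centroid, flatness])

# Hypothetical stand-in target: RMS energy plays the role of the
# listener-rated dynamics that the study actually predicts.
target = rms

model = GradientBoostingRegressor(random_state=0)
print(cross_val_score(model, X, target, cv=5, scoring="r2").mean())
```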
Automatisierte Extraktion rhythmischer Merkmale zur Anwendung in Music-Information Retrieval-Systemen (Automated extraction of rhythmic features for use in music information retrieval systems)
This thesis describes the automated extraction of features for the description of the rhythmic content of musical audio signals. These features are selected with respect to their applicability in music information retrieval (MIR) systems. While research on automatic extraction of rhythmic features, for example tempo and time signature, has been in progress for some time, current algorithms still...
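As a minimal sketch of the kind of rhythmic feature extraction the thesis targets, the code below estimates tempo and beat positions with librosa's standard onset and beat-tracking utilities; the input file and the derived descriptors are assumptions.

```python
# Minimal rhythmic-feature sketch (assumed librosa utilities, hypothetical file):
# estimate global tempo and beat positions from the onset-strength envelope.
import librosa
import numpy as np

y, sr = librosa.load("track.wav")  # hypothetical audio file

onset_env = librosa.onset.onset_strength(y=y, sr=sr)
tempo, beat_frames = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
tempo_bpm = float(np.atleast_1d(tempo)[0])  # scalar tempo in BPM

# Simple rhythmic descriptors a retrieval system might index.
ibi = np.diff(beat_times)  # inter-beat intervals in seconds
print(tempo_bpm, ibi.mean(), ibi.std())
```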